Search Results for "toolbox of the curve clustering model"


90
MathWorks Inc ascorbate pharmacokinetic model matlab code
Ascorbate Pharmacokinetic Model Matlab Code, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/ascorbate pharmacokinetic model matlab code/product/MathWorks Inc
Average 90 stars, based on 1 article review
ascorbate pharmacokinetic model matlab code - by Bioz Stars, 2026-04
90/100 stars
  Buy from Supplier

95
MathWorks Inc amesim–simulink co-simulation model
Amesim–Simulink Co Simulation Model, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 95/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/amesim–simulink co-simulation model/product/MathWorks Inc
Average 95 stars, based on 1 article review
amesim–simulink co-simulation model - by Bioz Stars, 2026-04
95/100 stars
  Buy from Supplier

90
MathWorks Inc matlab 2019a
Matlab 2019a, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/matlab 2019a/product/MathWorks Inc
Average 90 stars, based on 1 article review
matlab 2019a - by Bioz Stars, 2026-04
90/100 stars
  Buy from Supplier

90
MathWorks Inc function glme
Function Glme, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/function glme/product/MathWorks Inc
Average 90 stars, based on 1 article review
function glme - by Bioz Stars, 2026-04
90/100 stars
  Buy from Supplier

90
MathWorks Inc matlab model
Matlab Model, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/matlab model/product/MathWorks Inc
Average 90 stars, based on 1 article review
matlab model - by Bioz Stars, 2026-04
90/100 stars
  Buy from Supplier

90
MathWorks Inc linear discriminant analysis (lda) model
Linear Discriminant Analysis (Lda) Model, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/linear discriminant analysis (lda) model/product/MathWorks Inc
Average 90 stars, based on 1 article review
linear discriminant analysis (lda) model - by Bioz Stars, 2026-04
90/100 stars
  Buy from Supplier

96
MathWorks Inc constitutive model
Constitutive Model, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 96/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/constitutive model/product/MathWorks Inc
Average 96 stars, based on 1 article review
constitutive model - by Bioz Stars, 2026-04
96/100 stars
  Buy from Supplier

90
MathWorks Inc statistical analysis toolbox
Statistical Analysis Toolbox, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/statistical analysis toolbox/product/MathWorks Inc
Average 90 stars, based on 1 article review
statistical analysis toolbox - by Bioz Stars, 2026-04
90/100 stars
  Buy from Supplier

90
MathWorks Inc pde toolbox
Pde Toolbox, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/pde toolbox/product/MathWorks Inc
Average 90 stars, based on 1 article review
pde toolbox - by Bioz Stars, 2026-04
90/100 stars
  Buy from Supplier

94
MathWorks Inc reinforcement learning models
Fig. 1. Rationale, design, and analytic approach. Individuals learn from experience by selecting an action, observing its outcome, and updating the expected reward value of future actions. Value updates are made using PEs, which reflect the discrepancy between expected and obtained outcomes such that better-than-expected outcomes lead to positive PEs and worse-than-expected outcomes lead to negative PEs. Other salient social information can also be integrated with experience to influence social decision-making, as when the reputation of one’s partner predicts decisions to trust them even when reputation is unrelated to the partner’s actual behavior (27, 41). (A) Consider the decision about whether to buy a friend a holiday gift. Reputational information (e.g., news of a friend’s immoral behavior; blue circle) can be integrated with <t>reinforcement</t> history (e.g., did the friend buy you a gift last year? green circle) to affect one’s policy toward their social counterpart. Critically, decisions that correctly anticipate the behavior of one’s social partner (correctly predicting they bought you a gift, or correctly predicting they did not buy you a gift) yield positive PEs according to our policy model, leading to a cycle of reciprocity even when no actual reward is received. (B) Participants played a modified iterative social trust game with three fictional trustees, in which they had the option to keep an initial endowment or invest it in the hopes of increasing their profit if the trustee also invested. Pretask vignettes were used to manipulate trustees’ reputations (blue box). To manipulate reinforcement history, trustees returned at varying rates across rich, poor, and neutral blocks (green box). On trials where participants kept, counterfactual feedback about what the trustee would have chosen was provided, even though it did not affect the trial payout.
(C) The policy model posits that individuals learn from social feedback to optimize their approach, or policy, toward their social counterpart, leading to better anticipation of the counterpart’s behavior. One implication of this is that correct predictions of the trustee’s behavior will lead to positive reward PEs, even if no actual reward is provided. This can be seen by comparing the expected direction of PEs in a model in which participants track actual rewards (column 3) vs. a model in which they track the success of their policy toward the counterpart (column 4). We propose that policy PEs are primarily encoded within the brain’s default network. Image credit: Default network figure adapted from ref. 35. (D) MSEM was used to evaluate whether between-person variables (e.g., policy PEs encoded in the default network) moderate the effect of design variables on trial-level decision-making. Personality traits were introduced as between-person predictors of policy PEs in the default network. Formal tests of mediation were then used to examine the indirect effect of traits on behavior via learning signals (highlighted lines).
Reinforcement Learning Models, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 94/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/reinforcement learning models/product/MathWorks Inc
Average 94 stars, based on 1 article review
reinforcement learning models - by Bioz Stars, 2026-04
94/100 stars
  Buy from Supplier

90
MathWorks Inc video eyetracker toolbox
Video Eyetracker Toolbox, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/video eyetracker toolbox/product/MathWorks Inc
Average 90 stars, based on 1 article review
video eyetracker toolbox - by Bioz Stars, 2026-04
90/100 stars
  Buy from Supplier

90
MathWorks Inc matlab 7.14
Matlab 7.14, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/matlab 7.14/product/MathWorks Inc
Average 90 stars, based on 1 article review
matlab 7.14 - by Bioz Stars, 2026-04
90/100 stars
  Buy from Supplier

Image Search Results



Journal: Proceedings of the National Academy of Sciences of the United States of America

Article Title: Callousness, exploitativeness, and tracking of cooperation incentives in the human default network.

doi: 10.1073/pnas.2307221121

Figure Legend Snippet: Fig. 1. Rationale, design, and analytic approach (full legend reproduced above).

Article Snippet: Reinforcement learning models were fit using the variational Bayesian analysis toolbox in MATLAB, which leverages parameter estimates from the full sample to constrain individual parameter estimates, reducing the risk of misestimation in poorly performing participants (64).
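The article snippet above refers to reinforcement learning models that update expected values with prediction errors (PEs), where, per the figure legend, correct predictions of the trustee's behavior yield positive "policy" PEs even when no monetary reward is received. The following is a minimal illustrative sketch of that delta-rule update in Python; it is not the authors' fitted model (they used the variational Bayesian analysis toolbox in MATLAB), and the function names and learning rate here are hypothetical.

```python
# Illustrative delta-rule (Rescorla-Wagner) value update, as described in
# the figure legend. A sketch only, not the authors' implementation.

def update_value(v, outcome, alpha=0.3):
    """Move the expected value toward the observed outcome by a fraction
    alpha of the prediction error (PE)."""
    pe = outcome - v  # PE: obtained outcome minus expected value
    return v + alpha * pe, pe

def policy_outcome(chose_invest, trustee_invested):
    """Policy-PE coding from the legend: correctly predicting the
    trustee's behavior counts as a positive outcome (1.0) even when no
    money changes hands; an incorrect prediction counts as 0.0."""
    return 1.0 if chose_invest == trustee_invested else 0.0

# Example: two correct predictions raise the value; a miss lowers it.
v = 0.5
for chose, trustee in [(True, True), (False, False), (True, False)]:
    v, pe = update_value(v, policy_outcome(chose, trustee))
```

Under this policy model the sign of the PE tracks prediction accuracy rather than payout, which is the contrast drawn in panel C of the figure (reward-tracking vs. policy-tracking models).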

Techniques: Modification